feat: add first-class GLM (Zhipu AI) provider support #2
garyblankenship wants to merge 2 commits into voocel:main
Conversation
ProviderCredentials discarded the entire provider config (including base_url) when api_key was empty, falling back to EnvCredentials which returned no base URL for custom providers. Requests then hit the default OpenAI endpoint instead of the configured one. Also fix ensureProviderSetup to try env var fallback before erroring when a config file exists but api_key is empty.
Wire up the native litellm GLM provider instead of routing through
the OpenAI provider, which incorrectly appends /v1/ to base URLs.
- Add `"zhipu"` to `KnownProviderTypes` (maps to the `"glm"` protocol)
- Add `ZHIPU_API_KEY` / `ZHIPU_BASE_URL` environment variable support
- Add a `"glm"` case in the provider factory using litellm's registered GLM provider with correct endpoint path handling
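The endpoint-path difference behind these changes can be illustrated with a small sketch (`endpointFor` is a hypothetical helper, not code from this PR):

```go
package main

import "fmt"

// endpointFor illustrates why GLM cannot ride through the OpenAI client:
// OpenAI-style path logic appends /v1/..., while Zhipu's base URL already
// carries its full /api/.../v4 prefix and only needs /chat/completions.
func endpointFor(providerType, baseURL string) string {
	if providerType == "glm" {
		return baseURL + "/chat/completions"
	}
	// OpenAI-style clients append the versioned prefix themselves.
	return baseURL + "/v1/chat/completions"
}

func main() {
	base := "https://api.z.ai/api/coding/paas/v4"
	fmt.Println(endpointFor("glm", base))    // correct Zhipu endpoint
	fmt.Println(endpointFor("openai", base)) // wrong: double-versioned path
}
```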
Configuration example:

```json
"zhipu": {
  "base_url": "https://api.z.ai/api/coding/paas/v4",
  "models": ["glm-5.1", "glm-5", "glm-4.7"]
}
```
Pull request overview
Adds first-class support for Zhipu AI / GLM by routing a new "glm" provider type through LiteLLM, exposing it via a "zhipu" provider key, and improving env-var credential fallback behavior.
Changes:
- Add `"glm"` provider type handling via a LiteLLM-backed adapter in model creation.
- Register `"zhipu"` as a known provider that resolves to the `"glm"` protocol, and add `ZHIPU_API_KEY` / `ZHIPU_BASE_URL` env var support.
- Adjust bootstrap credential checks to allow env-var fallback even when a config file exists but has an empty `api_key`.
Reviewed changes
Copilot reviewed 3 out of 3 changed files in this pull request and generated 2 comments.
| File | Description |
|---|---|
| internal/provider/provider.go | Adds GLM model creation path using LiteLLM provider client. |
| internal/config/settings.go | Adds "zhipu" -> "glm" mapping and Zhipu env var credential support; preserves config base URL when key comes from env. |
| internal/bootstrap/input.go | Allows env fallback for API key even when settings files exist but credentials aren’t configured in-file. |
```diff
@@ -18,6 +18,7 @@ var KnownProviderTypes = map[string]string{
 	"openai":     "openai",
 	"openrouter": "openrouter",
 	"gemini":     "gemini",
```
provider.go now supports the provider type "glm", but KnownProviderTypes doesn't include a "glm" entry. If a user sets settings.provider to "glm" (mirroring how other providers work), ProviderType() will currently default to "openai" and route through the OpenAI client instead of the GLM/LiteLLM path. Consider adding "glm": "glm" here (and aligning env var lookup accordingly) so "glm" works as a first-class provider key, not only as an inferred type for "zhipu".
Suggested change:

```diff
 	"gemini": "gemini",
+	"glm":    "glm",
```
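The failure mode this comment describes can be sketched like this. The map contents follow the diff under review (including the suggested `"glm"` entry); `providerType` is a simplified stand-in for the real `ProviderType()` lookup:

```go
package main

import "fmt"

// knownProviderTypes mirrors the mapping under review, including the
// suggested "glm" entry.
var knownProviderTypes = map[string]string{
	"openai":     "openai",
	"openrouter": "openrouter",
	"gemini":     "gemini",
	"zhipu":      "glm",
	"glm":        "glm", // suggested addition
}

// providerType defaults unknown keys to "openai", which is why a missing
// "glm" entry would silently route GLM through the OpenAI client.
func providerType(key string) string {
	if t, ok := knownProviderTypes[key]; ok {
		return t
	}
	return "openai"
}

func main() {
	fmt.Println(providerType("zhipu")) // glm
	fmt.Println(providerType("glm"))   // glm (only with the suggested entry)
}
```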
```diff
 	"openai":     {"OPENAI_API_KEY", "OPENAI_BASE_URL"},
 	"openrouter": {"OPENROUTER_API_KEY", "OPENROUTER_BASE_URL"},
 	"gemini":     {"GEMINI_API_KEY", "GEMINI_BASE_URL"},
+	"zhipu":      {"ZHIPU_API_KEY", "ZHIPU_BASE_URL"},
```
EnvCredentials() only checks providerEnvVars, but providerEnvVars has a "zhipu" entry and no corresponding "glm" entry. This means settings.provider="glm" (or any flow that queries env creds for "glm") will never pick up ZHIPU_API_KEY / ZHIPU_BASE_URL. Consider adding a "glm" mapping (likely to the same env vars) or otherwise ensuring env fallback works for the provider type name as well.
Suggested change:

```diff
 	"zhipu": {"ZHIPU_API_KEY", "ZHIPU_BASE_URL"},
+	"glm":   {"ZHIPU_API_KEY", "ZHIPU_BASE_URL"},
```
Summary
- Add a native `glm` provider, which correctly handles Zhipu's `/api/paas/v4/chat/completions` endpoint path (the OpenAI provider incorrectly appends `/v1/`)
- Register `"zhipu"` as a known provider type (auto-resolves to the `"glm"` protocol)
- Add `ZHIPU_API_KEY` / `ZHIPU_BASE_URL` environment variable fallback

Configuration
```json
{
  "provider": "zhipu",
  "model": "glm-5.1",
  "providers": {
    "zhipu": {
      "base_url": "https://api.z.ai/api/coding/paas/v4",
      "models": ["glm-5.1", "glm-5", "glm-4.7", "glm-4.5-flash"],
      "small_model": "glm-4.5-flash"
    }
  }
}
```

API key via environment:
```shell
export ZHIPU_API_KEY=your-key
```

Test plan
- Set the `ZHIPU_API_KEY` env var and verify chat completions work with `glm-5.1`
- Verify requests hit `api.z.ai` (not OpenAI) by checking the error message on an invalid key
- `go build ./...` and `go test ./...` pass
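The "hits `api.z.ai`, not OpenAI" check could also be expressed as a small assertion on the resolved endpoint (`hitsZhipu` is a hypothetical test helper, not part of the PR):

```go
package main

import (
	"fmt"
	"net/url"
	"strings"
)

// hitsZhipu reports whether a resolved chat-completions endpoint targets
// api.z.ai -- the manual check from the test plan as a unit-testable predicate.
func hitsZhipu(endpoint string) bool {
	u, err := url.Parse(endpoint)
	if err != nil {
		return false
	}
	return u.Host == "api.z.ai" && strings.HasSuffix(u.Path, "/chat/completions")
}

func main() {
	fmt.Println(hitsZhipu("https://api.z.ai/api/coding/paas/v4/chat/completions")) // true
	fmt.Println(hitsZhipu("https://api.openai.com/v1/chat/completions"))           // false
}
```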